
    AICropCAM: Deploying classification, segmentation, detection, and counting deep-learning models for crop monitoring on the edge

    Precision Agriculture (PA) promises to meet the future demands for food, feed, fiber, and fuel while keeping their production sustainable and environmentally friendly. PA relies heavily on sensing technologies to inform site-specific decision support for planting, irrigation, fertilization, spraying, and harvesting. Traditional point-based sensors enjoy small data sizes but are limited in their capacity to measure plant and canopy parameters. Imaging sensors, on the other hand, can measure a wide range of these parameters, especially when coupled with Artificial Intelligence. The challenge, however, is the lack of computing, electric power, and connectivity infrastructure in agricultural fields, which prevents the full utilization of imaging sensors. This paper reported AICropCAM, a field-deployable imaging framework that integrated edge image processing, the Internet of Things (IoT), and LoRaWAN for low-power, long-range communication. The core component of AICropCAM is a stack of four Deep Convolutional Neural Network (DCNN) models running sequentially: CropClassiNet for crop type classification, CanopySegNet for canopy cover quantification, PlantCountNet for plant and weed counting, and InsectNet for insect identification. These DCNN models were trained and tested with >43,000 field crop images collected offline. AICropCAM was deployed as the sensor node of a distributed wireless sensor network, consisting of an RGB camera for image acquisition, a Raspberry Pi 4B single-board computer for edge image processing, and an Arduino MKR1310 for LoRa communication and power management. Our testing showed that the time to run the DCNN models ranged from 0.20 s for InsectNet to 20.20 s for CanopySegNet, and power consumption ranged from 3.68 W for InsectNet to 5.83 W for CanopySegNet. The classification model CropClassiNet reported 94.5% accuracy, and the segmentation model CanopySegNet reported 92.83% accuracy. The two object detection models, PlantCountNet and InsectNet, reported mean average precision of 0.69 and 0.02, respectively, on the test images. Predictions from the DCNN models were transmitted to the ThingSpeak IoT platform for visualization and analytics. We concluded that AICropCAM successfully implemented image processing on the edge, drastically reduced the amount of data being transmitted, and could satisfy the real-time needs of decision-making in PA. AICropCAM can be deployed on moving platforms such as center pivots or drones to increase its spatial coverage and resolution to support crop monitoring and field operations.
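
    The core design, running the four DCNNs sequentially on the device and transmitting only their compact predictions over LoRa, can be outlined in a short Python sketch. This is a minimal illustration rather than the authors' code: the infer stub and the model-loading details are assumptions.

    import time

    MODEL_STACK = ["CropClassiNet", "CanopySegNet", "PlantCountNet", "InsectNet"]

    def infer(model_name, image):
        # Placeholder for on-device DCNN inference; the real system would load
        # the trained model and run it on the Raspberry Pi 4B.
        return {"model": model_name, "prediction": None}

    def process_frame(image):
        results = {}
        for name in MODEL_STACK:
            start = time.monotonic()
            results[name] = infer(name, image)
            results[name]["runtime_s"] = time.monotonic() - start  # 0.20-20.20 s reported
        return results  # compact predictions, not raw images, go over LoRa

    Sending only these small prediction payloads is what makes LoRaWAN's low bandwidth sufficient and explains the drastic reduction in transmitted data.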

    Temporal dynamics of maize plant growth, water use, and leaf water content using automated high throughput RGB and hyperspectral imaging

    Automated collection of large-scale plant phenotype datasets using high-throughput imaging systems has the potential to alleviate current bottlenecks in data-driven plant breeding and crop improvement. In this study, we characterize the temporal dynamics of plant growth, water use, and leaf water content of two maize genotypes under two different water treatments. RGB (Red Green Blue) images are processed to estimate projected plant area, which is correlated with destructively measured plant shoot fresh weight (FW), dry weight (DW), and leaf area. Estimated plant FW and DW, along with pot weights, are used to derive the daily water consumption and water use efficiency (WUE) of individual plants. Hyperspectral images of the plants are processed to extract leaf reflectance, which is correlated with leaf water content (LWC). Strong correlations are found between projected plant area and all three destructively measured plant parameters (R2 > 0.95) at early growth stages. The correlations become weaker at later growth stages due to the large difference in plant structure between the two maize genotypes. Daily water consumption (or evapotranspiration) is largely determined by water treatment, whereas WUE (or biomass accumulation per unit of water used) is clearly determined by genotype, indicating a strong genetic control of WUE. LWC is successfully predicted from the hyperspectral images for both genotypes (R2 = 0.81 and 0.92). Hyperspectral imaging can be a powerful tool for phenotyping biochemical traits of whole maize plants, complementing RGB imaging for plant morphological trait analysis.
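
    As a rough illustration of the projected-plant-area step, a common approach is to threshold a greenness index computed from the RGB image and count plant pixels; the excess-green index and the threshold below are assumptions, not necessarily the study's segmentation method.

    import numpy as np

    def projected_plant_area(rgb):
        # rgb: H x W x 3 uint8 image of a single plant against the background
        r, g, b = (rgb[..., i].astype(float) for i in range(3))
        exg = 2.0 * g - r - b         # excess-green index highlights vegetation
        plant_mask = exg > 20.0       # hypothetical threshold
        return int(plant_mask.sum())  # pixel count serves as projected plant area

    The resulting pixel counts can then be regressed against the destructively measured FW, DW, and leaf area, and WUE follows as biomass gained divided by water consumed over the same interval.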

    Capturing Spatial Variability in Maize and Soybean using Stationary Sensor Nodes

    • Irrigation in agriculture maximizes crop yield and improves food security globally
    • Irrigation scheduling depends strongly on the ability to accurately estimate the appropriate amount and timing of water application
    • The timing of irrigation is best informed by crop canopy stress, and the amount of irrigation by soil moisture depletion
    • Developing upper (non-water-stressed) and lower (non-transpiring) baselines for irrigated and non-irrigated maize and soybean
    • Investigating the relationship between canopy stress and soil moisture stress
    Canopy temperature stress and soil moisture depletion were more strongly correlated for non-irrigated treatments in soybean than in maize.
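
    A standard way to combine canopy temperature with such baselines is the Crop Water Stress Index (CWSI). The sketch below uses the classic Idso-style formulation with entirely hypothetical baseline coefficients; the study develops its own crop-specific baselines.

    def cwsi(t_canopy, t_air, vpd, a=2.0, b=-1.5, dt_max=5.0):
        """Idso-style Crop Water Stress Index.
        a, b: coefficients of the fully transpiring baseline dT = a + b * VPD;
        dt_max: the non-transpiring limit. All three are placeholders."""
        dt = t_canopy - t_air                     # canopy-air temperature difference
        dt_min = a + b * vpd                      # fully transpiring baseline
        return (dt - dt_min) / (dt_max - dt_min)  # 0 = no stress, 1 = fully stressed

    # e.g., cwsi(t_canopy=31.0, t_air=29.0, vpd=2.5) -> roughly 0.56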

    QARC: Video Quality Aware Rate Control for Real-Time Video Streaming via Deep Reinforcement Learning

    Due to the fluctuation of throughput under various network conditions, how to adaptively choose a proper bitrate for real-time video streaming has become a pressing and interesting issue. Recent work focuses on providing high video bitrate rather than high video quality. Nevertheless, we notice that there exists a trade-off between sending bitrate and video quality, which motivates us to focus on how to strike a balance between them. In this paper, we propose QARC (video Quality Aware Rate Control), a rate control algorithm that aims to achieve higher perceptual video quality with a possibly lower sending rate and transmission latency. Starting from scratch, QARC uses a deep reinforcement learning (DRL) algorithm to train a neural network that selects future bitrates based on previously observed network status and past video frames, and we design a neural network that predicts future perceptual video quality as a vector, taking the place of raw pictures in the DRL inputs. We evaluate QARC over a trace-driven emulation. As expected, QARC outperforms existing approaches.
    Comment: Accepted by ACM Multimedia 2018
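
    The kind of objective such a DRL rate controller optimizes can be conveyed with a toy reward that trades predicted perceptual quality against sending rate and latency; the weights, bitrate ladder, and greedy baseline below are assumptions for illustration, not QARC's published formulation.

    BITRATE_LADDER = [0.3, 0.75, 1.2, 1.85, 2.85]  # Mbps, hypothetical ladder

    def reward(pred_quality, sending_rate_mbps, latency_s, alpha=0.5, beta=1.0):
        # Reward rises with predicted perceptual quality and falls with the
        # bandwidth spent and the transmission latency incurred.
        return pred_quality - alpha * sending_rate_mbps - beta * latency_s

    def pick_bitrate(predict_quality, est_latency):
        # Greedy baseline; the DRL agent instead learns a policy from
        # observed network history and past video frames.
        return max(BITRATE_LADDER,
                   key=lambda r: reward(predict_quality(r), r, est_latency(r)))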

    Ag-IoT for crop and environment monitoring: Past, present, and future

    CONTEXT: Automated monitoring of the soil-plant-atmosphere continuum at high spatiotemporal resolution is key to transforming labor-intensive, experience-based decision-making into an automatic, data-driven approach in agricultural production. Growers could make better management decisions by leveraging real-time field data, while researchers could use these data to answer key scientific questions. Traditionally, data collection in agricultural fields, which largely relies on human labor, can only generate limited numbers of data points with low resolution and accuracy. During the last two decades, crop monitoring has evolved drastically with the advancement of modern sensing technologies. Most importantly, the introduction of the IoT (Internet of Things) into crop, soil, and microclimate sensing has transformed crop monitoring from a qualitative, experience-based task into a quantitative, data-driven one. OBJECTIVE: Ag-IoT systems enable a data pipeline for modern agriculture that includes data collection, transmission, storage, visualization, analysis, and decision-making. This review serves as a technical guide for Ag-IoT system design and development for crop, soil, and microclimate monitoring. METHODS: It highlighted Ag-IoT platforms presented in 115 academic publications between 2011 and 2021 worldwide. These publications were analyzed based on the types of sensors and actuators used, main control boards, types of farming, crops observed, communication technologies and protocols, power supplies, and energy storage used in Ag-IoT platforms.
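
    As a concrete taste of the first two pipeline stages (collection and transmission), a sensor node might publish a reading to an IoT broker over MQTT, one common Ag-IoT transport. The broker address, topic, and field names below are hypothetical.

    import json
    import time

    import paho.mqtt.client as mqtt

    client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion
    client.connect("broker.example.com", 1883)  # hypothetical broker
    sample = {"ts": time.time(), "soil_moisture_pct": 23.4, "air_temp_c": 28.1}
    client.publish("field7/node3/env", json.dumps(sample))  # hypothetical topic
    client.disconnect()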

    Field-Based Scoring of Soybean Iron Deficiency Chlorosis Using RGB Imaging and Statistical Learning

    Iron deficiency chlorosis (IDC) is an abiotic stress in soybean that can cause significant biomass and yield reduction. IDC is characterized by stunted growth and by yellowing and interveinal chlorosis of early trifoliate leaves. Scoring IDC severity in the field is conventionally done by visual assessment. The goal of this study was to investigate the usefulness of Red Green Blue (RGB) images of soybean plots captured under field conditions for IDC scoring. A total of 64 soybean lines with four replicates were planted in 6 fields over 2 years. Visual scoring (referred to as Field Score, or FS) was conducted at the V3–V4 growth stage, and concurrently RGB images of the field plots were recorded with a high-throughput field phenotyping platform. A second set of IDC scores was assigned to the plot images (displayed on a computer screen) consistently by one person in the office (referred to as Office Score, or OS). Plot images were then processed to remove weeds and extract six color features, which were used to train computer-based IDC scoring models (referred to as Computer Score, or CS) using linear discriminant analysis (LDA) and support vector machines (SVM). The results showed that, in the fields where severe IDC symptoms were present, FS and OS were strongly positively correlated with each other, and both were strongly negatively correlated with yield. CS could satisfactorily predict IDC scores when evaluated using FS and OS as the reference (overall classification accuracy > 81%). SVM models appeared to outperform LDA models, and the SVM model trained to predict IDC OS gave the highest prediction accuracy. It was anticipated that coupling RGB imaging from the high-throughput field phenotyping platform with real-time image processing and IDC CS models would lead to a more rapid, cost-effective, and objective scoring pipeline for soybean IDC field screening and breeding.
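
    A minimal sketch of the computer-score (CS) modeling step, assuming six color features per plot and the scikit-learn implementations of the two classifiers named above; the data here are random placeholders standing in for the extracted color features and reference scores.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X = np.random.rand(256, 6)             # six color features per plot (placeholder)
    y = np.random.randint(1, 6, size=256)  # IDC scores on a hypothetical 1-5 scale

    for model in (LinearDiscriminantAnalysis(), SVC(kernel="rbf")):
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{type(model).__name__}: cross-validated accuracy = {acc:.2f}")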

    A multi-sensor system for high throughput field phenotyping in soybean and wheat breeding

    Collecting plant phenotypic data with sufficient resolution (in both space and time) and accuracy represents a long-standing challenge in plant science research and has been a major limiting factor for the effective use of genomic data for crop improvement. This is particularly true in plant breeding, where collecting large-scale field-based plant phenotypes can be very labor intensive and costly. In this paper, we reported a multi-sensor system for high-throughput phenotyping in plant breeding. The system comprised five sensor modules (ultrasonic distance sensors, thermal infrared radiometers, NDVI sensors, portable spectrometers, and RGB web cameras) to measure crop canopy traits from field plots. A GPS receiver was used to geo-reference the sensor measurements. Two environmental sensors (a solar radiation sensor and an air temperature/relative humidity sensor) were also integrated into the system to collect simultaneous environmental data. A LabVIEW program was developed to control and synchronize measurements from all sensor modules and to store sensor readings on the host computer. Canopy reflectance spectra (from the portable spectrometers) were post-processed to extract the NDVI and red-edge NDVI spectral indices, and RGB images were post-processed to extract the canopy green pixel fraction (as a proxy for biomass). The sensor system was tested in a soybean and wheat field trial. The results showed strong correlations among the sensor-based plant traits in both the early and late growing season. Significant correlations were also found between the sensor-based traits and final grain yield in the early season (Pearson’s correlation coefficient r ranged from 0.41 to 0.55) and the late season (r from 0.55 to 0.70), suggesting the potential use of the sensor system to assist in phenotypic selection for plant breeding. The sensor system performed satisfactorily and robustly in the field tests. It was concluded that the sensor system could be a powerful tool for plant breeders to collect field-based, high-throughput plant phenotyping data.
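
    The two spectral indices extracted from the canopy reflectance spectra are simple band ratios. The band centers noted below are conventional choices, assumed here since the exact wavelengths are not given in the abstract.

    def ndvi(nir, red):
        # Normalized Difference Vegetation Index from reflectance values
        return (nir - red) / (nir + red)

    def red_edge_ndvi(nir, red_edge):
        # Same form, with the red band replaced by a red-edge band
        return (nir - red_edge) / (nir + red_edge)

    # Conventional band centers: NIR ~800 nm, red ~670 nm, red edge ~720 nm
    print(ndvi(0.45, 0.05), red_edge_ndvi(0.45, 0.30))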

    The fallacy of profitable green supply chains: The role of green information systems in attenuating the sustainability trade-offs

    While green supply chain management (GSCM) has been studied extensively, the lack of a clear view on the performance improvements arising from the adoption of GSCM practices obstructs a full understanding of its consequences. Moreover, there are still limited efforts to understand the contingent nature of how performance is improved in this context. This study aims to ascertain whether GSCM implementation yields sustainability–profitability trade-offs and to examine the moderating effects of green information systems (GIS) on performance improvements. Survey data were collected from 189 firms operating in the UK automotive industry and analyzed using moderated hierarchical regression. The results suggest that pursuing GSCM can bring trade-offs into play, demonstrating a paradoxical view of enhanced sustainability versus reduced profitability. The authors call this phenomenon the fallacy of profitable GSCM. Interestingly, high levels of GIS were found to positively moderate the relationships between GSCM practices and economic performance. First, this study contributes to the knowledge bank of GSCM by elucidating the mixed views about GSCM adoption and its economic effects, dispelling the notion that the “low-hanging fruits” of GSCM are readily available. Second, it offers new directions for balancing the trade-offs between sustainability and profitability, contributing to the development of a more robust GSCM theory. Two important managerial contributions can be drawn from this study: (1) managers need to prioritize the GSCM practices with the most significant performance improvement; (2) they are encouraged to develop more robust GIS and exploit the capabilities of information sharing, supply chain traceability, and monitoring as a new pathway to attenuate sustainability trade-offs. Future studies are recommended to explore wider sectors and employ longitudinal or quasi-experimental designs to capture the effects of GSCM practices on performance over time.
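
    For readers unfamiliar with the method, moderated hierarchical regression enters the main effects first and the interaction term in a later step, with the interaction capturing the moderation. The sketch below uses statsmodels on simulated data with hypothetical variable names, not the study's survey measures.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 189  # matches the study's sample size
    df = pd.DataFrame({"gscm": rng.normal(size=n), "gis": rng.normal(size=n)})
    df["econ_perf"] = (0.2 * df.gscm + 0.3 * df.gis
                       + 0.25 * df.gscm * df.gis + rng.normal(size=n))

    step1 = smf.ols("econ_perf ~ gscm + gis", data=df).fit()  # main effects only
    step2 = smf.ols("econ_perf ~ gscm * gis", data=df).fit()  # adds gscm:gis term
    print("interaction beta:", step2.params["gscm:gis"])
    print("delta R^2:", step2.rsquared - step1.rsquared)      # moderation effect size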

    DUET: Cross-modal Semantic Grounding for Contrastive Zero-shot Learning

    Zero-shot learning (ZSL) aims to predict unseen classes whose samples never appear during training. One of the most effective and widely used kinds of semantic information for zero-shot image classification is attributes, which are annotations of class-level visual characteristics. However, current methods often fail to discriminate subtle visual distinctions between images, due not only to the shortage of fine-grained annotations but also to attribute imbalance and co-occurrence. In this paper, we present a transformer-based end-to-end ZSL method named DUET, which integrates latent semantic knowledge from pre-trained language models (PLMs) via a self-supervised multi-modal learning paradigm. Specifically, we (1) developed a cross-modal semantic grounding network to investigate the model's capability of disentangling semantic attributes from images; (2) applied an attribute-level contrastive learning strategy to further enhance the model's discrimination of fine-grained visual characteristics against attribute co-occurrence and imbalance; (3) proposed a multi-task learning policy for considering multi-model objectives. We find that DUET can achieve state-of-the-art performance on three standard ZSL benchmarks and a knowledge-graph-equipped ZSL benchmark. Its components are effective and its predictions are interpretable.
    Comment: AAAI 2023 (Oral). Repository: https://github.com/zjukg/DUET
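
    As a generic stand-in for the attribute-level contrastive strategy (not DUET's exact loss), an InfoNCE-style objective over matched image and attribute embeddings can be written in PyTorch as follows.

    import torch
    import torch.nn.functional as F

    def attribute_contrastive_loss(img_emb, attr_emb, temperature=0.07):
        # img_emb, attr_emb: (batch, dim); row i of each forms a matched pair
        img_emb = F.normalize(img_emb, dim=-1)
        attr_emb = F.normalize(attr_emb, dim=-1)
        logits = img_emb @ attr_emb.t() / temperature  # pairwise cosine similarities
        targets = torch.arange(img_emb.size(0))        # positives on the diagonal
        return F.cross_entropy(logits, targets)

    loss = attribute_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))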